Patent abstract:
A method of analyzing a multispectral image (10) comprises constructing a detection image from values of a signal-to-noise ratio. These values relate to the content of the multispectral image within a window determined around each pixel, when the contrast in the window is maximized by Fisher projection. The signal-to-noise ratios are calculated from first- and second-order integral images, which are themselves calculated only once initially, so that the total amount of computation is reduced. The analysis method is compatible with real-time implementation during successive multispectral image captures, especially for an environmental monitoring mission.
Publication number: FR3013876A1
Application number: FR1361735
Filing date: 2013-11-28
Publication date: 2015-05-29
Inventors: Marc Bousquet;Maxime Thiebaut;Nicolas Roux;Philippe Foubert;Thierry Touati
Applicant: Sagem Defense Securite SA;
IPC main class:
Patent description:

[0001] The present invention relates to a method for analyzing a multispectral image, as well as a computer program for implementing such a method. It also relates to a method and a device for monitoring an environment.
[0002] Monitoring an environment is a common task, especially to detect enemy intrusions. Such monitoring presents particular difficulties when carried out in a terrestrial environment. Indeed, a terrestrial environment such as a countryside landscape can contain a large number of distinct elements with irregular contours, such as trees, bushes, rocks, buildings, etc., which complicate the interpretation of the image and the search for intruding elements. In addition, under certain circumstances, such as military surveillance, an intruder may be camouflaged to make detection in the landscape more difficult. Commonly, such camouflage is effective against observation in visible light, especially for wavelengths between 0.45 μm (micrometer) and 0.65 μm, and especially around 0.57 μm, which corresponds to the maximum sensitivity of the human eye. To successfully detect the intruding element, also called a target in the jargon of the person skilled in the art, despite a complex landscape and possible camouflage of the target, it is known to perform a multispectral observation of the environment. Such a multispectral observation consists in simultaneously capturing several images of the same landscape in different spectral bands, so that a target which does not appear distinctly in the images captured in certain spectral bands is revealed by the images corresponding to other spectral bands. Each spectral band can be narrow, with an interval of wavelength values of a few tens of nanometers, or wider, or even very wide with a width of several micrometers, especially when the spectral band is located in one of the infrared domains: between 3 μm and 5 μm or between 8 μm and 12 μm.
It is thus known that an observation in the wavelength range between 0.8 μm and 1.2 μm can be effective in revealing a target in a vegetation environment, while the target is effectively camouflaged against detection by observation in the range of light visible to the human eye. However, such multispectral detection may still be insufficient to allow an operator in charge of surveillance to detect the presence of a target in a terrestrial environment. Indeed, in some circumstances, none of the images that are separately associated with the spectral bands shows the target sufficiently distinctly for the surveillance operator to detect it within the allocated observation time. In the following, each image that corresponds separately to one of the spectral bands is called a spectral image. For such situations, it is further known to improve the efficiency of target detection by presenting to the operator an image constructed by Fisher projection. Such a method is known in particular from the article "Some practical issues in anomalous detection and exploitation of regions of interest in hyperspectral images", F. Goudail et al., Applied Optics, Vol. 45, No. 21, pp. 5223-5236. According to this method, the image presented to the operator is constructed by combining, at each of its points, called pixels, the intensity values that are captured separately for several spectral bands, so as to optimize the contrast of the resulting image. Theoretically, this image construction consists in projecting, for each pixel, the vector of the intensities captured for the selected spectral bands onto an optimal direction in the multidimensional space of spectral intensity values. This optimal projection direction can be determined from the covariance matrix of the spectral intensities, estimated over the entire image field.
This amounts in fact to seeking a maximum correlation between the intensity variations present in the different images captured in the selected spectral bands. The contrast of the image presented to the operator is thus at least equal to that of each separate spectral image, so that target detection by the operator is both more efficient and more reliable. Alternatively, the optimal projection direction can be sought directly using a usual optimization algorithm, by varying the projection direction in the multidimensional space of spectral intensities so as to maximize the image contrast.
[0003] It is known to improve the Fisher projection contrast within a spatial window that is smaller than the entire image. The surveillance operator then selects the window inside the entire image, including its position, depending on the nature of the environment at that location in the image and on his desire to focus the search for a potential target. To facilitate such a selection of the position of the window, and also to facilitate the identification of the nature of an intrusion occurring in this area, it is further known to present the operator with a composite image on a screen. Such a composite image may consist of one of the spectral images outside the window, and the portion of the multispectral image that results from the Fisher projection inside the window. Alternatively, several spectral images whose wavelength ranges are contained within the sensitivity range of the human eye can be used to display a representation of the landscape in natural or near-natural colors outside the window. But in the composite image presented to the operator, the enhanced contrast provided by the Fisher projection is restricted to the inside of the window. Because of this restriction, the surveillance operator does not have a visualization of the whole field of view with reinforced contrast. He is then unable to quickly become aware of the extent of a camouflaged enemy intrusion, because of the time necessary to scan the entire field of observation with windows that are selected and processed successively. From this situation, a general purpose of the invention is then to make the monitoring of an environment from a multispectral image more reliable.
[0004] In other words, the general object of the invention is to further reduce the probability of non-detection of an intruding element in a scene captured in a multispectral image. More particularly, the purpose of the invention is to provide the surveillance operator with an image of the field of view in real time, in which the contrast is reinforced at every point as a function of the information contained in the multispectral image. In other words, the invention aims to provide the operator with an image of the field of view that is optimized in its entirety, is easily interpretable, and can be produced with a very short calculation time. Preferably, this duration is compatible with the real-time reception of a video stream from a camera used to continuously capture successive multispectral images.
[0005] To achieve some of these or other goals, a first aspect of the invention provides a novel method of analyzing a multispectral image, when said multispectral image comprises several spectral images of the same scene but corresponding to different spectral intervals, each spectral image assigning an intensity value to each pixel, or image point, located at an intersection of a row and a column of a matrix of the multispectral image, and an origin point being defined at a corner of the peripheral boundary of the matrix. According to the invention, a detection image is constructed by assigning a display value to each pixel of a useful area of the matrix, which display value is obtained from a signal-to-noise ratio calculated for this pixel. For this, the method comprises the following steps: /1/ for each spectral image, calculating a first-order integral image by assigning to each calculation pixel an integral value equal to the sum of the intensity values of this spectral image for all the pixels contained in a rectangle having two opposite vertices located respectively on the origin point and on the calculation pixel; and for each pair of spectral images from the multispectral image, calculating a second-order integral image by assigning to each calculation pixel another integral value equal to the sum, for all pixels contained in the rectangle having two opposite vertices located respectively on the origin point and on the calculation pixel, of the products of the two intensity values relating to the same pixel but respectively assigned by each spectral image of the pair; /2/ defining a fixed window frame and a mask internal to the frame, this mask defining a target area and a background area inside the frame; and /3/ for each pixel of the useful area of the matrix: /3-1/ placing the window at a position in the matrix which is determined by the pixel, the window being limited by the frame defined in step /2/; /3-2/
determining a Fisher factor, in the form of a vector associated with a Fisher projection that increases a contrast of the multispectral image in the window between the target area and the background area; /3-3/ calculating, from the integral values read in the first- and second-order integral images: - two mean vectors, respectively for the target area and the background area, each having one coordinate per spectral image, equal to an average of the intensity values of this spectral image calculated for the pixels of the target area or of the background area, respectively; - a mean matrix, having one factor per pair of spectral images, equal to an average of the products of the two intensity values relating to the same pixel but respectively assigned by each spectral image of the pair, calculated for the pixels of the background area; /3-4/ then calculating: - two Fisher mean values, mFT and mFB, respectively for the target area and for the background area, each equal to the result of a scalar product between the Fisher factor and the mean vector for the target area or for the background area, respectively; - a Fisher variance on the background area, VarFB, equal to the result of the quadratic product of the mean matrix by the Fisher factor, minus the square of the Fisher mean for the background area; and /3-5/ calculating the signal-to-noise ratio for the pixel of the useful area, equal to [(mFT − mFB)² / VarFB]^(1/2). These values of the signal-to-noise ratio are then used to construct the detection image, pixel by pixel in the useful area of the matrix. Thus, a first characteristic of the invention consists in presenting to the surveillance operator an image which is entirely composed of the values of the signal-to-noise ratio resulting from local processing by Fisher projection. This image, called the detection image, is homogeneous in its nature and in its method of calculation. A second characteristic of the invention consists in proposing a method for calculating the signal-to-noise ratio which is based on integral images. In a first step, the spectral images are converted into integral images of order one and order two, then the signal-to-noise ratio is calculated from these integral images. In this way, the sums of intensity values over large numbers of pixels are computed only once; the results of these sums are then read and combined to obtain the value of the signal-to-noise ratio for each pixel. With this structure of the analysis method, it can be performed very quickly without requiring significant computation means. In particular, the analysis method of the invention is compatible with real-time reception of a video stream from a camera used to capture multispectral images continuously. Furthermore, the use of a window smaller than the matrix, to calculate the display value of each pixel in the detection image, allows this detection image to be easily interpretable visually by the surveillance operator. In other words, most of the patterns contained in the detection image can be easily recognized by the operator. Preferably, the window frame may have dimensions between one fifth and one fiftieth of those of the matrix, parallel to the rows and columns.
[0006] To increase a contrast of the detection image, the mask can advantageously be defined in step /2/ so that the target area and the background area are separated by an intermediate area inside the window frame. In particular embodiments of the invention, the display value can be obtained from the signal-to-noise ratio for each pixel of the detection image, using one of the following methods or a combination of them: - compare the signal-to-noise ratio with a threshold, and the display value is taken equal to zero if this signal-to-noise ratio is lower than the threshold, otherwise the display value is taken equal to the signal-to-noise ratio; or - apply a linear scale conversion to the signal-to-noise ratio, and the display value is taken equal to the result of this conversion. In preferred implementations of the invention, the Fisher factor can itself be calculated in step /3-2/ for each pixel of the useful area, from the integral values read in the integral images. The execution of the analysis method of the invention is thus even faster. In particular, the Fisher factor can be calculated in step /3-2/ for each pixel of the useful area, in the form of a product between, on the one hand, a row vector resulting from the difference between the mean vectors calculated for the target area and the background area, and on the other hand, an inverted covariance matrix. The covariance matrix considered has one factor per pair of spectral images from the multispectral image, equal to the covariance of the spectral intensity values respectively assigned by each spectral image of the pair, calculated for the pixels of the background area. A second aspect of the invention provides a medium readable by one or more processors, comprising code recorded on this medium and adapted to control execution, by the processor(s), of an analysis method according to the first aspect of the invention.
This second aspect of the invention is therefore a computer program, in the nature of a commercial product or a product available in any form.
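The two display-value conversions described above (thresholding of the signal-to-noise ratio, and linear scale conversion) can be sketched as follows. This is an illustrative Python/NumPy sketch only, not part of the claimed method; the function name and parameters are assumptions:

```python
import numpy as np

def display_values(snr, threshold=None, scale=1.0):
    """Convert signal-to-noise ratios into display values.

    If a threshold is given, ratios below it are displayed as zero (first
    method described in the text); a linear scale conversion is applied
    through the `scale` factor (second method). Both can be combined.
    """
    snr = np.asarray(snr, dtype=float)
    out = snr * scale
    if threshold is not None:
        out[snr < threshold] = 0.0
    return out

# Example: suppress ratios below 2.0, then stretch the rest for display.
print(display_values([0.5, 1.9, 2.0, 4.0], threshold=2.0, scale=10.0))
```

In practice the scale factor would map the expected range of signal-to-noise ratios onto the dynamic range of the screen.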
[0007] A third aspect of the invention provides a method of monitoring an environment, which comprises the steps of: - simultaneously capturing several spectral images of the environment, so as to obtain a multispectral image; - analyzing the multispectral image using an analysis method according to the first aspect of the invention; and - displaying the detection image on a screen, for a surveillance operator who observes the screen. The monitoring method may further comprise a comparison of the display value of each pixel in the detection image with an alert threshold. A pixel may then be displayed in this detection image with a modified, flashing or overlaid color if its display value is greater than the alert threshold. The attention of the surveillance operator is thus drawn even more to this place in the field of observation, to determine whether an intruding element is present there. Finally, a fourth aspect of the invention proposes a device for monitoring an environment, which comprises: - means for storing a multispectral image formed of several spectral images of the same scene, associated with separate spectral intervals; - a screen comprising pixels located respectively at intersections of rows and columns of a matrix; - an image processing system adapted to calculate first- and second-order integral images from the spectral images, and to store these integral images; - means for defining a window frame and a mask internal to the frame, the mask defining a target area and a background area inside the frame; and - a control system adapted to implement step /3/ of an analysis method according to the first aspect of the invention, and to display a detection image on the screen, in which each pixel of a useful area of the matrix has a display value obtained from the signal-to-noise ratio calculated for this pixel.
Other features and advantages of the present invention will appear in the following description of nonlimiting examples of implementation, with reference to the accompanying drawings, in which: - FIG. 1 is a schematic representation of a multispectral image; - FIG. 2 represents a display screen used to implement the invention; - FIGS. 3a and 3b illustrate principles of integral image construction used to implement the invention; - FIG. 4 represents a mask that can be used to calculate a contrast inside a window, in particular modes of implementation of the invention; - FIG. 5 is a block diagram of the steps of a method according to the invention; - FIG. 6 shows a sequence of calculations carried out in certain steps of the method of FIG. 5; - FIG. 7 illustrates a mean vector calculation principle used in the present invention; and - FIG. 8 illustrates the principle of a Fisher projection in a two-dimensional space of spectral intensities. For the sake of clarity, the dimensions of the elements shown in some of these figures correspond neither to actual dimensions nor to actual dimensional ratios. In addition, identical references indicated in different figures designate elements that are identical or have identical functions. Reference numeral 10 in FIG. 1 globally designates a multispectral image formed of several individual images 11, 12, 13, ... that have been captured simultaneously for the same scene. In other words, the individual images 11, 12, 13, ... have been captured by imaging channels arranged in parallel, activated at the same time and having the same input optical field. However, each image 11, 12, 13, ... has been captured by selecting a portion of the radiation from the scene, which is separated from each portion of the radiation used for another of the images 11, 12, 13, ...
This separation is carried out as a function of the wavelength λ of the radiation, in one of the ways known to those skilled in the art, so that each of the images 11, 12, 13, ..., called a spectral image, was captured with radiation whose wavelength belongs to a distinct interval, preferably without overlap with the intervals of the other spectral images. The number of spectral images 11, 12, 13, ... may be arbitrary, for example equal to twenty spectral images, each associated with a wavelength interval whose width may be between a few nanometers and a few tens of nanometers, or more. Such a multispectral image can also be called hyperspectral, depending on the number of spectral images that compose it and the width of each of their wavelength ranges. λ1, λ2, λ3, ... denote central values of the respective wavelength ranges of the spectral images 11, 12, 13, ... Depending on the application of the invention, these intervals may be between 400 nm (nanometers) and 1 μm (micrometer), or between 400 nm and 2.5 μm, for example. Each spectral image 11, 12, 13, ... can be processed for the invention from a file read on a storage medium, or from a digital stream produced by a camera shooting at video rate. Depending on the case, the image data may be raw data produced by one or more image sensors, or data already processed by certain operations such as the registration of the spectral images with respect to one another, correction of over- or under-exposure, etc. FIG. 2 shows a display screen for the multispectral image 10. It comprises a matrix 1 of image points 2, or pixels 2, which are arranged at intersections of rows and columns of the matrix. For example, the matrix 1 may have 500 columns and 500 rows of pixels 2. x and y respectively denote the direction of the rows and that of the columns. An origin point O is set at a corner of the peripheral boundary of the matrix 1, for example at the top left.
Each of the spectral images 11, 12, 13, ... separately assigns an intensity value to each pixel 2 of the matrix 1. Such an assignment is direct if each spectral image is captured directly according to the matrix 1, or can be indirect in the case where at least some spectral images 11, 12, 13, ... are captured according to a different matrix. In this case, an intensity value for each pixel 2 of the matrix 1 can be obtained by interpolation, for each spectral image whose initial matrix is different from the matrix 1. An integral image is calculated separately for each spectral image 11, 12, 13, ..., according to a common calculation principle known to those skilled in the art. Reference 11I in FIG. 3a denotes the integral image calculated from the spectral image 11. The value of the pixel P in the integral image 11I is calculated as the sum (sign Σ in FIG. 3a) of all the intensity values in the image 11, for the pixels 2 of the matrix 1 contained in a rectangular zone R whose two opposite corners are the origin point O and the pixel P itself. The integral images thus calculated were named first-order integral images in the general description of the invention, and their values for the pixels 2 of the matrix 1 were called integral values. FIG. 3b illustrates the principle of calculation of the second-order integral image obtained from the spectral images 11 and 12. This integral image is indicated by the reference (11x12)I. Its integral value for the pixel P is calculated as follows: for each pixel 2 of the rectangular zone R whose opposite corners are the origin point O and the pixel P itself, the product of the two intensity values attributed to this pixel by the spectral images 11 and 12 is calculated (product operator in FIG. 3b). These products are then added together over the zone R, and the result of the sum is the integral value of the integral image (11x12)I at the pixel P.
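The first- and second-order integral images of step /1/ can be sketched with two cumulative sums per image. This is an illustrative Python/NumPy sketch under the conventions above (origin O at the top left); the function names are assumptions:

```python
import numpy as np

def integral_image(spectral):
    """First-order integral image: the value at a pixel P is the sum of all
    intensities in the rectangle with opposite corners O (top left) and P."""
    spectral = np.asarray(spectral, dtype=float)
    return np.cumsum(np.cumsum(spectral, axis=0), axis=1)

def integral_image_2(spectral_k, spectral_r):
    """Second-order integral image for a pair of spectral images: the
    pixel-wise products of the two intensities are accumulated instead."""
    return integral_image(np.asarray(spectral_k) * np.asarray(spectral_r))

# The bottom-right integral value is the sum over the whole matrix.
img = np.arange(12.0).reshape(3, 4)
assert integral_image(img)[-1, -1] == img.sum()
assert integral_image_2(img, img)[-1, -1] == (img * img).sum()
```

For d spectral images, d first-order and d(d+1)/2 distinct second-order integral images are computed once, in step S1.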
Second-order integral images are calculated in a similar way for all possible pairs of spectral images 11, 12, 13, ..., including pairs whose two spectral images are identical. Of course, pairs whose two images are the same but selected in the reverse order correspond to identical second-order integral images, so that each is calculated only once. All first- and second-order integral images are stored (step S1 of FIG. 5), so that the integral values can be read quickly in the further execution of the method. Windows F1, F2, F3, ... are then successively selected inside the matrix 1, so as to form a scan of the matrix (FIG. 2). Each window is defined by a frame whose dimensions are preferably fixed, and by a placement position of the frame in the matrix 1. For example, the window F1 can be placed against the top-left corner of the matrix 1; the window F2 is then obtained by shifting the window frame one column to the right with respect to the window F1, and so on, until the window frame comes into abutment against the right edge of the matrix 1. The scan can then be continued by returning to the left edge of the matrix 1, but one pixel row lower, and so on. By associating each placement of the frame with the pixel 2 of the matrix 1 located at the center of the window thus formed, a useful zone marked ZU is progressively scanned by the successive window centers, excluding a complementary zone ZC which results from edge effects. The zone ZU corresponds to the extent of the detection image constructed according to the invention. A fixed mask is also defined inside the window frame, and is transposed with the latter to each position of the frame in the matrix 1 when a different window is selected. This mask defines a target zone denoted T and a background zone denoted B inside the window frame. For example, as shown in FIG.
4, the target zone T and the background zone B may have respective square boundaries, be concentric and be separated by an intermediate zone denoted J. nxB and nyB denote the external dimensions of the background zone B along the x and y directions. They correspond to the dimensions of the window frame, for example each equal to 51 pixels. nxT and nyT denote the dimensions of the target zone T in the directions x and y, for example each equal to 7 pixels, and nxJ and nyJ denote the external dimensions of the intermediate zone J, for example each equal to 31 pixels. The values of nxB and nyB, nxT and nyT, nxJ and nyJ can be selected based on an assumption about the size of an intruding element sought in the imaged scene, and its assumed distance. The construction of the detection image is now described with reference to FIGS. 5 and 6. The same sequence of steps, described here for the window F1, is repeated for each pixel 2 of the useful zone ZU. It consists in determining a maximum contrast that exists between the contents of the multispectral image 10 located in the target zone T and in the background zone B. The value of this maximum contrast is then used to display the considered pixel in the detection image.
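The concentric square mask of FIG. 4 can be sketched as Boolean index arrays, here with the example dimensions given above (51, 7 and 31 pixels). This is an illustrative sketch; the function name and the use of the Chebyshev distance to delimit the square rings are assumptions:

```python
import numpy as np

def window_mask(n_b=51, n_t=7, n_j=31):
    """Masks for the target zone T and background zone B inside a square
    window frame of side n_b: T (side n_t) is centered in the frame, and B
    is everything outside the intermediate zone J (outer side n_j), which
    belongs to neither zone."""
    c = n_b // 2
    y, x = np.ogrid[:n_b, :n_b]
    cheb = np.maximum(np.abs(y - c), np.abs(x - c))  # distance to window center
    target = cheb <= n_t // 2
    background = cheb > n_j // 2
    return target, background

target, background = window_mask()
assert target.sum() == 7 * 7                   # NT
assert background.sum() == 51 * 51 - 31 * 31   # NB excludes T and J
```

The masks are fixed once in step /2/ and simply translated with the frame at each window position.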
[0008] The maximum value of the contrast is that which results from applying the Fisher projection to the multispectral content of the window F1. For this, the Fisher projection is itself determined first. It can be determined using one of the methods already known. However, the method described now is preferred because it exploits the integral images that have already been calculated. The intensity values of each pixel 2 of the target zone T and the background zone B, in the window F1, are considered for the d spectral images 11, 12, 13, ... which together constitute the multispectral image 10, d being an integer greater than or equal to two. A vector of spectral intensities is then constructed for the target zone T in the following way: it has a separate coordinate for each of the spectral images, and this coordinate is equal to an average of the intensity values of all the pixels of the target zone T in this spectral image. Thus, the following vector can be constructed: mT = (1/NT) · Σi∈T [x1(i), x2(i), ..., xd(i)]^t, where i is an index which numbers the pixels of the target zone T, x1(i), x2(i), ..., xd(i) are the intensity values of the pixel i respectively in the spectral images 11, 12, 13, ..., and NT is the number of pixels of the target zone T. In a space of intensities for the spectral images, the vector mT corresponds to an average position of the vectors of spectral intensities of all the pixels of the target zone T. In other words, mT is the mean vector of the spectral intensities for the zone T. In known manner, each coordinate of the vector mT can be calculated directly from the corresponding first-order integral image, as illustrated in FIG.
7: Σi∈T xk(i) = ImIntk(A) + ImIntk(C) − ImIntk(B) − ImIntk(D), where k is the index of the coordinate of the vector mT, less than or equal to d, A, B, C and D are the pixels at the vertices of the window F, ImIntk(A) is the integral value at pixel A read in the first-order integral image k, and similarly for the pixels B, C and D. Another vector of spectral intensities is constructed analogously for the background zone B: mB = (1/NB) · Σi∈B [x1(i), x2(i), ..., xd(i)]^t, where NB is the number of pixels of the background zone B. In general, it is different from the number NT, but not necessarily. The vector mB corresponds in the same way to an average position of the vectors of spectral intensities of all the pixels of the background zone B. It is called the mean vector of the spectral intensities for the background zone B. The vector mB can also be calculated from the first-order integral images, in a way that is adapted to the shape of the background zone B but is easily accessible to the skilled person. The vectors mT and mB are arranged as columns and each have d coordinates.
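The four-corner reading of FIG. 7 can be sketched as follows, with zero-based NumPy indices; the handling of rectangles touching the matrix edges (r0 = 0 or c0 = 0) is an implementation detail not spelled out in the text, and the function names are assumptions:

```python
import numpy as np

def rect_sum(I, r0, c0, r1, c1):
    """Sum of the original intensities over rows r0..r1 and columns c0..c1
    (inclusive), read in O(1) from a first-order integral image I via the
    corner combination ImInt(A) + ImInt(C) - ImInt(B) - ImInt(D)."""
    total = I[r1, c1]
    if r0 > 0:
        total -= I[r0 - 1, c1]
    if c0 > 0:
        total -= I[r1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += I[r0 - 1, c0 - 1]
    return total

def mean_vector(integrals, r0, c0, r1, c1):
    """Mean vector over a rectangular zone (e.g. mT over the target zone T):
    one coordinate per spectral image, each read from that image's
    first-order integral image."""
    n = (r1 - r0 + 1) * (c1 - c0 + 1)
    return np.array([rect_sum(I, r0, c0, r1, c1) for I in integrals]) / n

# Check against a direct sum on one random spectral image.
rng = np.random.default_rng(0)
img = rng.random((20, 20))
I = np.cumsum(np.cumsum(img, axis=0), axis=1)
assert np.isclose(rect_sum(I, 3, 5, 9, 12), img[3:10, 5:13].sum())
```

For the square-ring background zone B, the sum can be obtained as the frame sum minus the inner-square sum, as the text notes.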
[0009] The following covariance matrix is also constructed from the intensity values of the pixels 2 of the background zone B: CovarB = [Covar(xk, xr)], k, r = 1, ..., d, where Var(x1, x1) denotes the variance of the intensity values of the spectral image 11 calculated on the pixels of the background zone B, Covar(x1, x2) denotes the covariance of the respective intensity values of the spectral images 11 and 12 calculated on the pixels of the background zone B, and so on for all pairs of index values selected from 1, 2, ..., d. The matrix CovarB is square, of dimension d. It can be determined in a manner that is also known to those skilled in the art, from the second-order integral images and the components of the vector mB. For subsequent use, a mean matrix MMB is also calculated, each factor of which is equal to an average, on the pixels of the background zone B, of the product of the intensity values of the two spectral images corresponding to the position of this factor in the matrix MMB. In other words, the factor of this mean matrix MMB located on its kth row and rth column is: (MMB)k,r = (1/NB) · Σi∈B xk(i) · xr(i), k and r being two integers between 1 and d, these limit numbers being included. The matrix MMB is square, symmetrical and of dimension d.
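The relation between the mean matrix MMB, the mean vector mB and the covariance matrix CovarB can be sketched directly. In the full method each entry of MMB is read from a second-order integral image, but in this illustrative sketch the pixels of the background zone are taken as an explicit array X (one row per pixel, one column per spectral image); the function names are assumptions:

```python
import numpy as np

def mean_matrix(X):
    """MMB: entry (k, r) is the average over the zone of xk(i) * xr(i)."""
    X = np.asarray(X, dtype=float)
    return X.T @ X / X.shape[0]

def covariance_matrix(X):
    """CovarB via the identity Covar(xk, xr) = (MMB)k,r - mB[k] * mB[r]."""
    X = np.asarray(X, dtype=float)
    m = X.mean(axis=0)
    return mean_matrix(X) - np.outer(m, m)

# Agrees with the biased (1/NB) sample covariance.
rng = np.random.default_rng(1)
X = rng.random((100, 3))  # 100 background pixels, d = 3 spectral images
assert np.allclose(covariance_matrix(X), np.cov(X.T, bias=True))
```

This identity is what allows CovarB to be assembled from the second-order integral images (which give MMB) and the first-order integral images (which give mB).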
[0010] The intermediate calculations of the mean vectors of spectral intensities mT and mB, the covariance matrix CovarB, and the mean matrix MMB belong to the step S2 of FIGS. 5 and 6. The Fisher projection PFisher can also be determined in this step S2, as follows: PFisher = (mT − mB)^t · CovarB^(−1), where the exponent t denotes the transpose of a vector, the exponent −1 denotes the inverse of a matrix, and · denotes the matrix product operation, here applied between the row vector (mT − mB)^t and the matrix CovarB^(−1). The Fisher projection thus expressed is a row vector with d coordinates. Usually, it is intended to be applied to the vector of the intensity values of each pixel of the window F1 in the form of the following matrix product: PFisher(j) = PFisher · [x1(j), x2(j), ..., xd(j)]^t, where j denotes any pixel in the window F1, and PFisher(j) is the intensity value for the pixel j which results from the Fisher projection applied to the vector of intensity values for this same pixel j in the d spectral images. In the jargon of those skilled in the art, the projection PFisher determined for the selected window is called the Fisher factor, and the set of intensity values PFisher(j) obtained for all the pixels j of the window is called the Fisher matrix. FIG. 8 illustrates this Fisher projection in the spectral intensity space, for the example case of two spectral images: d = 2. The notations introduced above are included in this figure. The concentric ellipses referenced T and B symbolize level curves associated with constant values of the numbers of pixels in the target zone T and in the background zone B. The Fisher matrix thus obtained for the window F1 is the representation of the content of the multispectral image 10 inside the window F1 which has the maximum contrast. DmT-mB is the direction onto which the vectors of spectral intensities of the pixels are projected, according to the Fisher method.
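The Fisher factor of step /3-2/ can be sketched as follows; solving the linear system instead of explicitly inverting CovarB is a standard numerical choice, not something the text prescribes, and the function names are assumptions:

```python
import numpy as np

def fisher_factor(m_t, m_b, covar_b):
    """Row vector PFisher = (mT - mB)^t · CovarB^(-1); since CovarB is
    symmetric, this equals the solution p of CovarB · p = (mT - mB)."""
    return np.linalg.solve(covar_b, np.asarray(m_t) - np.asarray(m_b))

def fisher_projection(p_fisher, x):
    """PFisher(j): scalar projection of one pixel's intensity vector x."""
    return float(p_fisher @ np.asarray(x))

# With an identity covariance, PFisher reduces to mT - mB.
p = fisher_factor([3.0, 1.0], [1.0, 0.0], np.eye(2))
assert np.allclose(p, [2.0, 1.0])
assert fisher_projection(p, [1.0, 1.0]) == 3.0
```

As the next paragraph shows, fisher_projection never needs to be evaluated pixel by pixel in the claimed method; only PFisher itself is used.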
The invention is based on the use of the contrast of the Fisher matrix, but without it being necessary to calculate the Fisher matrix itself in order to obtain the value of its contrast. A significant saving in calculation time results from this, which is one of the advantages of the invention. The contrast of the Fisher matrix in each window is defined according to the following formula:

SNR = [ (mFT − mFB)² / VarFB ]^(1/2)

where mFT is the mean of the intensity values of the Fisher matrix for the target area T in the window considered, i.e.:

mFT = (1/NT) · Σ_{i ∈ T} PFisher(i)

mFB is the mean of the intensity values of the Fisher matrix for the background area B in the window considered, i.e.:

mFB = (1/NB) · Σ_{i ∈ B} PFisher(i)

and VarFB is the variance of the intensity values of the Fisher matrix for the background area B in the window considered, i.e.:

VarFB = (1/NB) · Σ_{i ∈ B} PFisher(i)² − mFB²

The contrast SNR of the Fisher matrix is the signal-to-noise ratio which is cited in the general description of the present invention, and it is assigned to the pixel 2 of the useful area ZU on which the window to which it relates is centered. But the inventors have discovered that the means mFT and mFB, and the first term of the variance VarFB, can be expressed in the following ways:
- the mean mFT is equal to the Fisher factor applied to the mean vector of the spectral intensities for the target area T: mFT = PFisher · mT, where · denotes the dot product operation between the row vector PFisher and the column vector mT;
- similarly, the mean mFB is equal to the Fisher factor applied to the mean vector of the spectral intensities for the background area B: mFB = PFisher · mB; and
- the first term of the variance VarFB is equal to the quadratic product of the average matrix MMB by the Fisher factor:

(1/NB) · Σ_{i ∈ B} PFisher(i)² = PFisher · MMB · PFisher^t

where, in the right-hand sides, mT, mB, MMB and PFisher have been calculated in step S2 for the pixel 2 on which the current window is centered.
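The shortcut above, which obtains the contrast SNR from mT, mB and MMB without ever building the Fisher matrix, can be sketched as follows (hypothetical names; p stands for the Fisher factor PFisher):

```python
import numpy as np

def snr_from_moments(p, m_T, m_B, MM_B):
    """Signal-to-noise ratio of the Fisher matrix, computed without it.

    Uses the identities from the text:
      m_FT = P_Fisher . m_T
      m_FB = P_Fisher . m_B
      mean over B of P_Fisher(i)^2 = P_Fisher . MM_B . P_Fisher^t
    so Var_FB = p MM_B p^t - m_FB^2 and SNR = sqrt((m_FT - m_FB)^2 / Var_FB).
    """
    m_FT = p @ m_T
    m_FB = p @ m_B
    var_FB = p @ MM_B @ p - m_FB ** 2
    return np.sqrt((m_FT - m_FB) ** 2 / var_FB)
```

Only d-dimensional vector and matrix products remain per window, instead of a projection of every pixel followed by sums over the window.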
PFisher^t is the column vector that is associated with the Fisher factor PFisher. Step S3 consists of calculating the means mFT and mFB, as well as the variance VarFB, using these formulas. Figure 8 also shows the Fisher mean values mFT and mFB along the direction DmT−mB. The signal-to-noise ratio SNR is then calculated in step S4. It is thus obtained, for each pixel 2 of the useful area ZU, solely from the integral images of order one and two. These integral images have themselves been calculated beforehand, only once, in step S1. A saving of calculations is obtained in this way which is considerable, and thanks to which the method of the invention is compatible with real-time execution as successive multispectral images are captured, received in a video stream or read from a recording medium.
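The integral images of order one and two of step S1, and the four-lookup rectangle sums that make the per-window statistics cheap, can be sketched as follows; the array layout and the function names are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def integral_images(cube):
    """First- and second-order integral images, computed once (step S1).

    I1[r, c, b]      : sum of band b over the rectangle from the origin to (r, c)
    I2[r, c, b1, b2] : same running sum for the product band_b1 * band_b2
    """
    prod = cube[:, :, :, None] * cube[:, :, None, :]         # (H, W, d, d)
    I1 = np.cumsum(np.cumsum(cube, axis=0), axis=1)
    I2 = np.cumsum(np.cumsum(prod, axis=0), axis=1)
    return I1, I2

def rect_sum(I, r0, c0, r1, c1):
    """Sum over the rectangle rows r0..r1, columns c0..c1 (inclusive),
    obtained from only four lookups in an integral image."""
    s = I[r1, c1].copy()
    if r0 > 0:
        s -= I[r0 - 1, c1]
    if c0 > 0:
        s -= I[r1, c0 - 1]
    if r0 > 0 and c0 > 0:
        s += I[r0 - 1, c0 - 1]
    return s
```

Once I1 and I2 exist, every window average and every factor of MMB costs a constant number of lookups, regardless of the window size, which is the source of the real-time behavior claimed by the text.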
[0011] Optionally, the values of the ratio SNR can themselves be processed in step S5, in particular with the aim of making the detection image even more easily understandable for the surveillance operator. Such processing can consist in comparing each value of the ratio SNR with one or more thresholds, and modifying this value according to the comparison. For example, the value of the ratio SNR for any one of the pixels of the useful area ZU can be reduced to zero when it is initially less than a first threshold. Simultaneously, the value of the ratio SNR can be increased to a maximum value when it exceeds a second threshold, so that the pixel concerned appears more clearly in the detection image. The attention of the surveillance operator can thus be drawn more strongly to this location of the detection image. The first and/or second threshold may be fixed, or determined according to a statistical study of all the values of the ratio SNR that have been obtained for the pixels 2 of the useful area ZU. For example, each threshold can be calculated from an average of the values of the ratio SNR over the pixels 2 of the useful area ZU. The processing of the values of the ratio SNR may also include a linear scale conversion, in particular to make the amplitude of variation of the ratio SNR within the useful area ZU coincide with the range of display intensities of the pixels in the detection image. Thus, the zero display intensity can be attributed to the minimum value that has been obtained for the ratio SNR in the useful area ZU, and the maximum display intensity can simultaneously be assigned to the maximum value reached by the ratio SNR in the useful area ZU. A display intensity value then results from such a linear conversion for each value of the ratio SNR that is intermediate between these minimum and maximum values. Possibly, thresholding and scale-conversion processing can be combined.
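A sketch of this optional step S5 processing, combining thresholding with a linear scale conversion toward display intensities; the two optional thresholds and the 0..255 output range are assumptions of this sketch, not values from the text:

```python
import numpy as np

def to_display(snr, t_low=None, t_high=None, out_max=255.0):
    """Map the SNR values of the useful area to display intensities.

    Values below t_low are forced to the bottom of the scale (displayed as
    zero), values above t_high are forced to the top, then a linear
    conversion maps the SNR range onto [0, out_max].
    """
    v = np.asarray(snr, dtype=float).copy()
    lo, hi = v.min(), v.max()
    if t_low is not None:
        v[v < t_low] = lo      # drop weak responses to the display minimum
    if t_high is not None:
        v[v > t_high] = hi     # saturate strong responses at the maximum
    if hi == lo:
        return np.zeros_like(v)  # flat input: nothing to rescale
    return (v - lo) * (out_max / (hi - lo))
```

With both thresholds left at None this reduces to the pure linear conversion described above; thresholds computed from the mean of the SNR values over the useful area would be one way to set them adaptively.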
The detection image is then constructed by assigning a display value to each pixel 2 in the useful area ZU of the matrix 1. This display value can be directly equal to the value of the ratio SNR which has been obtained for this pixel in step S4, or result from this value following the processing of step S5. The detection image is then displayed on the screen in step S6, to be observed by the surveillance operator. Optionally, this detection image can be displayed alternately with a representation of the surveillance field that reproduces a visual perception by the human eye, that is to say a perception from light which belongs to the visible wavelength range and which comes from the field of view. Orientation of the operator within the surveillance field is thus facilitated, making it easier to locate elements that are revealed by the detection image even though these elements are invisible directly to the human eye.
[0012] Optionally, various effects may be added to the detection image to further highlight areas where the signal-to-noise ratio has produced display values that are above an alert threshold. These effects can be a colored display of the pixels concerned, and/or a blinking thereof, and/or the addition of an overlay. The alert threshold may be predefined, or derived from a statistical study of the signal-to-noise ratio values or of the display values in the useful area ZU. It is understood that the invention can be reproduced while changing secondary aspects of the embodiments which have been described in detail above, while retaining the main advantages which have been mentioned and which are recalled once more: a detection image, consistent and homogeneous in its construction, is presented to the surveillance operator to reveal camouflaged elements present in the field of view; these uncovered elements can be recognized by the operator from the detection image; and the detection image can be calculated and presented to the operator in real time during the surveillance mission.
Claims (10)
1. A method of analyzing a multispectral image (10), said multispectral image comprising several spectral images (11, 12, 13, ...) of the same scene but corresponding to different spectral intervals, each spectral image assigning an intensity value to each pixel (2) which is located at an intersection of a row and a column of a matrix (1) of the multispectral image, and an origin point (0) being defined at a corner of a peripheral boundary of the matrix;
a method according to which a detection image is constructed by assigning a display value to each pixel of a useful area (ZU) of the matrix, said display value being obtained from a signal-to-noise ratio calculated for said pixel;
the method comprising the following steps:
/1/ for each spectral image, calculating an integral image of order one by assigning to each calculation pixel an integral value equal to a sum of the intensity values of said spectral image for all the pixels contained in a rectangle having two opposite vertices located respectively on the origin point and on said calculation pixel; and for each pair of spectral images of the multispectral image, calculating an integral image of order two by assigning to each calculation pixel another integral value equal to a sum, for all the pixels contained in the rectangle having two opposite vertices located respectively on the origin point and on said calculation pixel, of the products of the two intensity values relating to the same pixel but assigned respectively by each spectral image of the pair;
/2/ defining a fixed window frame and a mask internal to the frame, said mask defining a target area and a background area within said frame; and
/3/ for each pixel (2) of the useful area (ZU) of the matrix (1):
/3-1/ placing a window at a position in the matrix which is determined by the pixel, said window being constrained by the frame defined in step /2/;
/3-2/ determining a Fisher factor, in the form of a vector associated with a
Fisher projection that increases a contrast of the multispectral image in the window between the target area and the background area;
/3-3/ calculating, from the integral values read in the integral images of order one and two:
- two mean vectors, respectively for the target area and for the background area, each having one coordinate per spectral image, equal to an average of the intensity values of said spectral image, calculated for the pixels of the target area or of the background area, respectively;
- an average matrix, having one factor per pair of spectral images, equal to an average of the products of the two intensity values relating to the same pixel but assigned respectively by each spectral image of the pair, calculated for the pixels of the background area;
/3-4/ then calculating:
- two Fisher mean values, mFT and mFB, respectively for the target area and for the background area, each equal to a result of a scalar product between the Fisher factor and the mean vector for the target area or for the background area, respectively;
- a Fisher variance on the background area, VarFB, equal to a result of a quadratic product of the average matrix by the Fisher factor, minus a square of the Fisher mean for the background area; and
/3-5/ calculating the signal-to-noise ratio for the pixel of the useful area, equal to [(mFT − mFB)² / VarFB]^(1/2).
2. The analysis method according to claim 1, wherein the display value is obtained from the signal-to-noise ratio for each pixel of the detection image using one of the following methods or a combination of said methods:
- comparing the signal-to-noise ratio with a threshold, the display value being taken equal to zero if said signal-to-noise ratio is less than the threshold, and otherwise being taken equal to said signal-to-noise ratio; or
- applying a linear scale conversion to the signal-to-noise ratio, the display value being taken equal to a result of the conversion.
3. The analysis method according to claim 1 or 2, wherein the window frame has dimensions between one fifth and one fiftieth of the dimensions of the matrix parallel to the rows and to the columns.
4. The analysis method according to any one of the preceding claims, wherein the mask is defined in step /2/ so that the target area and the background area are separated by an intermediate area inside the window frame.
5. The analysis method according to any one of the preceding claims, wherein the Fisher factor is itself calculated in step /3-2/ for each pixel of the useful area, from the integral values read in the integral images.
6. The analysis method according to claim 5, wherein the Fisher factor is calculated in step /3-2/ for each pixel of the useful area, in the form of a product between, on the one hand, a row vector resulting from a difference between the mean vectors calculated for the target area and the background area and, on the other hand, an inverted covariance matrix, said covariance matrix having one factor per pair of spectral images of the multispectral image, equal to a covariance of the spectral intensity values assigned respectively by each spectral image of the pair, calculated for the pixels of the background area.
7. A computer program product, comprising a medium readable by one or more processors, and codes written on said medium and adapted to control an execution, by one or more processors, of an analysis method according to any one of claims 1 to 6.
8. A method of monitoring an environment, comprising the following steps:
- simultaneously capturing several spectral images of the environment, so as to obtain a multispectral image;
- analyzing the multispectral image using an analysis method according to any one of claims 1 to 6; and
- displaying the detection image on a screen, for a surveillance operator who observes the screen.
9. The monitoring method according to claim 8, further comprising comparing the display value of each pixel of the detection image with an alert threshold, and wherein a pixel is further displayed in the detection image with a modified color, a blinking or an overlay if the display value of said pixel is greater than the alert threshold.
10. A device for monitoring an environment, comprising:
- means for storing a multispectral image (10) formed of several spectral images (11, 12, 13, ...) of the same scene, which are associated with separate spectral intervals;
- a screen comprising pixels (2) located respectively at intersections of rows and columns of a matrix (1);
- an image processing system adapted to calculate integral images of order one and two from the spectral images (11, 12, 13, ...) and to store said integral images;
- means for defining a window frame and a mask internal to the frame, said mask defining a target area and a background area within said frame; and
- a control system adapted to implement step /3/ of an analysis method according to any one of claims 1 to 6, and to display a detection image on the screen, in which each pixel (2) of a useful area (ZU) of the matrix (1) has a display value which is obtained from the signal-to-noise ratio calculated for said pixel.
Family patents:
Publication number | Publication date
WO2015078927A1|2015-06-04|
EP3074921B1|2018-01-31|
FR3013876B1|2016-01-01|
US9922407B2|2018-03-20|
IL245870A|2017-03-30|
CA2931741A1|2015-06-04|
EP3074921A1|2016-10-05|
US20170024867A1|2017-01-26|
IL245870D0|2016-06-30|
CA2931741C|2019-06-25|
Cited documents:
Publication number | Filing date | Publication date | Applicant | Title
FR2982393A1 | 2011-11-09 | 2013-05-10 | Sagem Defense Securite | SEARCH FOR A TARGET IN A MULTISPECTRAL IMAGE
WO2016198803A1 | 2015-06-10 | 2016-12-15 | Sagem Defense Securite | Method for displaying a multispectral image
US7200243B2 | 2002-06-28 | 2007-04-03 | The United States Of America As Represented By The Secretary Of The Army | Spectral mixture process conditioned by spatially-smooth partitioning
US8897571B1 | 2011-03-31 | 2014-11-25 | Raytheon Company | Detection of targets from hyperspectral imagery
US8670628B2 | 2011-08-16 | 2014-03-11 | Raytheon Company | Multiply adaptive spatial spectral exploitation
US9721319B2 | 2011-10-14 | 2017-08-01 | Mastercard International Incorporated | Tap and wireless payment methods and devices
FR3011663B1 | 2013-10-07 | 2015-11-13 | Sagem Defense Securite | METHOD FOR VISUALIZING A MULTISPECTRAL IMAGE
FR3013876B1 | 2013-11-28 | 2016-01-01 | Sagem Defense Securite | ANALYSIS OF A MULTISPECTRAL IMAGE
FR3013878B1 | 2013-11-28 | 2016-01-01 | Sagem Defense Securite | ANALYSIS OF A MULTISPECTRAL IMAGE
GB2534903A | 2015-02-05 | 2016-08-10 | Nokia Technologies Oy | Method and apparatus for processing signal data
KR101699528B1 | 2015-06-30 | 2017-01-24 | 삼성전자 주식회사 | Magnetic resonance imaging apparatus and generating method for magnetic resonance image thereof
FR3043823B1 | 2015-11-12 | 2017-12-22 | Sagem Defense Securite | METHOD FOR DECAMOUFLING AN OBJECT
EP3378222A4 | 2015-11-16 | 2019-07-03 | Orbital Insight, Inc. | Moving vehicle detection and analysis using low resolution remote sensing imagery
CN108256419B | 2017-12-05 | 2018-11-23 | 交通运输部规划研究院 | A method of port and pier image is extracted using multispectral interpretation
Legal events:
2015-10-23 | PLFP | Fee payment | Year of fee payment: 3
2016-10-24 | PLFP | Fee payment | Year of fee payment: 4
2017-03-03 | CD | Change of name or company name | Owner name: SAGEM DEFENSE SECURITE, FR | Effective date: 20170126
2017-10-20 | PLFP | Fee payment | Year of fee payment: 5
2018-10-24 | PLFP | Fee payment | Year of fee payment: 6
2019-10-22 | PLFP | Fee payment | Year of fee payment: 7
2020-10-21 | PLFP | Fee payment | Year of fee payment: 8
2021-10-20 | PLFP | Fee payment | Year of fee payment: 9
Priority and related applications:
Application number | Filing date | Title
FR1361735A | 2013-11-28 | ANALYSIS OF A MULTISPECTRAL IMAGE
PCT/EP2014/075706 (WO2015078927A1) | 2014-11-26 | Analysis of a multispectral image
US15/100,201 (US9922407B2) | 2014-11-26 | Analysis of a multispectral image
CA2931741A (CA2931741C) | 2014-11-26 | Analysis of a multispectral image
EP14803124.8A (EP3074921B1) | 2014-11-26 | Analysis of a multispectral image
IL245870A | 2016-05-26 | Analysis of a multispectral image